Scene completion is the task of completing missing geometry from partial scans of a scene. Most previous methods compute an implicit representation, the Truncated Signed Distance Function (T-SDF) on a 3D grid, as input to a neural network. The truncation limits, but does not remove, the ambiguous cases that the sign introduces on non-closed surfaces. As an alternative, we propose an Unsigned Distance Function (UDF) called Unsigned Weighted Euclidean Distance (UWED) as the input representation for scene completion neural networks. UWED is simple and effective as a geometric representation and can be computed on any point cloud; unlike the usual Signed Distance Functions (SDFs), UWED does not require normal computation. To obtain explicit geometry, we propose a method that extracts a point cloud from UDF values discretized on a regular grid. We compare different SDFs and UDFs on the scene completion task, on indoor and outdoor point clouds collected from RGB-D and LiDAR sensors, and show improved completion using the proposed UWED function.
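The abstract does not spell out the exact UWED weighting, so the sketch below makes one plausible assumption: the unsigned distance at each grid node is an inverse-distance-weighted average of the distances to its k nearest cloud points. Note that, as the abstract states, no surface normals are involved. The function name `uwed_grid` and the choice of k are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def uwed_grid(points, grid_coords, k=8, eps=1e-8):
    """Unsigned weighted Euclidean distance evaluated at grid nodes.

    Hypothetical weighting: inverse-distance-weighted mean of the
    distances to the k nearest cloud points; the paper's exact
    scheme may differ. No surface normals are required.
    """
    tree = cKDTree(points)
    dists, _ = tree.query(grid_coords, k=k)   # (M, k) neighbor distances
    w = 1.0 / (dists + eps)                   # inverse-distance weights
    return (w * dists).sum(axis=1) / w.sum(axis=1)

# Example: evaluate UWED on a 32^3 grid over the unit cube
pts = np.random.rand(10_000, 3)
axis = np.linspace(0.0, 1.0, 32)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)
udf = uwed_grid(pts, grid)                    # (32**3,) unsigned distances
```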
LiDAR sensors provide rich 3D information about the surrounding scene and are becoming increasingly important for autonomous vehicle tasks such as semantic segmentation, object detection, and tracking. The ability to simulate LiDAR sensors will accelerate the testing, validation, and deployment of autonomous vehicles, while reducing cost and eliminating the risks of testing in real-world scenarios. To address the problem of simulating LiDAR data with high fidelity, we propose a pipeline that leverages real-world point clouds acquired by mobile mapping systems. Point-based geometry representations, more specifically splats, have demonstrated their ability to accurately model the underlying surface in very large point clouds. We introduce an adaptive splat generation method that accurately models the underlying 3D geometry, especially for thin structures. We also develop a faster LiDAR simulation by ray casting on the GPU while efficiently handling large point clouds. We test our LiDAR simulation in real-world scenarios, showing qualitative and quantitative results against basic splatting and meshing techniques, and demonstrating the advantages of our modeling technique.
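As a rough illustration of the ray-casting step, the sketch below intersects a single ray with a set of disk splats (center, normal, radius) and keeps the closest hit in front of the sensor. It is a minimal CPU version of the idea; the paper runs the casting on the GPU over adaptively sized splats, and all names here are hypothetical.

```python
import numpy as np

def ray_splat_hit(origin, direction, centers, normals, radii):
    """Closest intersection of one ray with disk splats.

    origin (3,), direction (3,); centers (N, 3), unit normals (N, 3),
    radii (N,) describe the splats. Returns (index, distance) or (None, None).
    """
    d = direction / np.linalg.norm(direction)
    denom = normals @ d                                 # (N,) ray/plane alignment
    t = np.full(len(centers), np.inf)
    valid = np.abs(denom) > 1e-6                        # skip near-parallel splats
    t[valid] = np.einsum("ij,ij->i", centers[valid] - origin,
                         normals[valid]) / denom[valid]
    t[t <= 0] = np.inf                                  # discard hits behind the origin
    finite = np.isfinite(t)
    hits = origin + t[finite, None] * d                 # points on the splat planes
    inside = np.linalg.norm(hits - centers[finite], axis=1) <= radii[finite]
    tf = t[finite]
    tf[~inside] = np.inf                                # plane hit outside the disk
    t[finite] = tf
    i = int(np.argmin(t))
    return (i, t[i]) if np.isfinite(t[i]) else (None, None)
```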
Paris-CARLA-3D is a dataset of several dense colored point clouds built by a mobile LiDAR and camera system. The data consist of two sets: synthetic data produced with the open-source CARLA simulator (700 million points) and real data acquired in the city of Paris (60 million points), hence the name Paris-CARLA-3D. One advantage of this dataset is that the same LiDAR and camera platform as the one used to produce the real data was simulated in the open-source CARLA simulator. Furthermore, manual annotation with CARLA's semantic labels was performed on the real data, allowing methods of transfer from synthetic to real data to be tested. The goal of this dataset is to provide a challenging benchmark for evaluating and improving methods on difficult vision tasks for 3D mapping of outdoor environments: semantic segmentation, instance segmentation, and scene completion. For each task, we describe the evaluation protocol as well as experiments establishing a baseline.
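For the semantic segmentation task, such benchmarks typically report per-class IoU and its mean; the dataset's exact protocol is defined in the paper, so the snippet below is only the standard point-wise variant, with illustrative names.

```python
import numpy as np

def per_class_iou(pred, gt, num_classes, ignore=-1):
    """Point-wise IoU per class plus the mean (mIoU).

    pred, gt: flat integer label arrays over all evaluated points.
    """
    mask = gt != ignore
    pred, gt = pred[mask], gt[mask]
    ious = {}
    for c in range(num_classes):
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                          # skip classes absent from both
            ious[c] = np.sum((pred == c) & (gt == c)) / union
    return ious, float(np.mean(list(ious.values())))
```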
We present Kernel Point Convolution (KPConv), a new design of point convolution, i.e. one that operates on point clouds without any intermediate representation. The convolution weights of KPConv are located in Euclidean space by kernel points, and applied to the input points close to them. Its capacity to use any number of kernel points gives KPConv more flexibility than fixed grid convolutions. Furthermore, these locations are continuous in space and can be learned by the network. Therefore, KPConv can be extended to deformable convolutions that learn to adapt kernel points to local geometry. Thanks to a regular subsampling strategy, KPConv is also efficient and robust to varying densities. Whether they use deformable KPConv for complex tasks, or rigid KPConv for simpler tasks, our networks outperform state-of-the-art classification and segmentation approaches on several datasets. We also offer ablation studies and visualizations to provide understanding of what has been learned by KPConv and to validate the descriptive power of deformable KPConv.
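In the rigid variant, each kernel point x_k carries a weight matrix W_k, and a neighbor y_i contributes in proportion to a linear correlation h(y_i, x_k) = max(0, 1 - ||y_i - x_k|| / sigma). A minimal NumPy sketch for a single output point follows; names and shapes are illustrative, not the reference implementation.

```python
import numpy as np

def kpconv_point(neighbors, feats, kernel_pts, weights, sigma):
    """Rigid KPConv at one output point (coordinates relative to it).

    neighbors : (N, 3)  neighbor offsets y_i - x
    feats     : (N, Cin) neighbor features
    kernel_pts: (K, 3)  kernel point positions
    weights   : (K, Cin, Cout) per-kernel-point weight matrices
    sigma     : influence radius of each kernel point
    """
    # Linear correlation between each neighbor and each kernel point
    d = np.linalg.norm(neighbors[:, None, :] - kernel_pts[None, :, :], axis=-1)
    h = np.maximum(0.0, 1.0 - d / sigma)                # (N, K)
    # Aggregate features per kernel point, then apply its weight matrix
    per_kernel = np.einsum("nk,nc->kc", h, feats)       # (K, Cin)
    return np.einsum("kc,kco->o", per_kernel, weights)  # (Cout,)
```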
In intensively managed forests in Europe, where forests are divided into stands of small size and may show heterogeneity within stands, a high spatial resolution (10 - 20 meters) is arguably needed to capture the differences in canopy height. In this work, we developed a deep learning model based on multi-stream remote sensing measurements to create a high-resolution canopy height map over the "Landes de Gascogne" forest in France, a large maritime pine plantation of 13,000 km$^2$ with flat terrain and intensive management. This area is characterized by even-aged and mono-specific stands, of a typical length of a few hundred meters, harvested every 35 to 50 years. Our deep learning U-Net model uses multi-band images from Sentinel-1 and Sentinel-2 with composite time averages as input to predict tree height derived from GEDI waveforms. The evaluation is performed with external validation data from forest inventory plots and a stereo 3D reconstruction model based on Skysat imagery available at specific locations. We trained seven different U-Net models based on a combination of Sentinel-1 and Sentinel-2 bands to evaluate the importance of each instrument in the dominant height retrieval. The model outputs allow us to generate a 10 m resolution canopy height map of the whole "Landes de Gascogne" forest area for 2020 with a mean absolute error of 2.02 m on the test dataset. The best predictions were obtained using all available satellite layers from Sentinel-1 and Sentinel-2, but using only one satellite source also provided good predictions. For all validation datasets in coniferous forests, our model showed better metrics than previous canopy height models available in the same region.
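A toy stand-in for the described setup: time-averaged Sentinel-1 and Sentinel-2 composites stacked channel-wise and regressed to a per-pixel height map under an L1 (MAE) objective. The band counts, channel widths, and the miniature U-Net below are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A miniature encoder-decoder with one skip connection; a stand-in
    for the regression U-Net, not the paper's model."""
    def __init__(self, in_ch):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = nn.Sequential(nn.Conv2d(96, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 1))
    def forward(self, x):
        s = self.enc1(x)                        # full-resolution skip features
        y = self.up(self.enc2(self.down(s)))    # bottleneck, then upsample
        return self.dec(torch.cat([y, s], dim=1))

# Assumed input: time-averaged composites, e.g. 2 Sentinel-1 bands (VV, VH)
# stacked with 10 Sentinel-2 bands -> 12 channels on a 10 m grid.
x = torch.randn(4, 12, 64, 64)
height = TinyUNet(12)(x)                        # (4, 1, 64, 64) height in meters
gedi = torch.randn(4, 1, 64, 64)                # GEDI-derived targets (sparse in practice)
loss = nn.functional.l1_loss(height, gedi)      # MAE objective
```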
White matter bundle segmentation is a cornerstone of modern tractography to study the brain's structural connectivity in domains such as neurological disorders, neurosurgery, and aging. In this study, we present FIESTA (FIber gEneration and bundle Segmentation in Tractography using Autoencoders), a reliable and robust, fully automated, and easily semi-automatically calibrated pipeline based on deep autoencoders that can dissect and fully populate white matter (WM) bundles. Our framework allows the transition from one anatomical bundle definition to another with marginal calibrating time. This pipeline is built upon the FINTA, CINTA, and GESTA methods, which demonstrated how autoencoders can be used successfully for streamline filtering, bundling, and streamline generation in tractography. Our proposed method improves bundling coverage by recovering hard-to-track bundles with generative sampling through the latent space seeding of the subject bundle and the atlas bundle. A latent space of streamlines is learned using autoencoder-based modeling combined with contrastive learning. Using an atlas of bundles in standard space (MNI), our proposed method segments new tractograms using the autoencoder latent distance between each tractogram streamline and its closest neighbor bundle in the atlas of bundles. Intra-subject bundle reliability is improved by recovering hard-to-track streamlines, using the autoencoder to generate new streamlines that increase each bundle's spatial coverage while remaining anatomically meaningful. Results show that our method is more reliable than state-of-the-art automated virtual dissection methods such as RecoBundles, RecoBundlesX, TractSeg, White Matter Analysis, and XTRACT. Overall, these results show that our framework improves the practicality and usability of current state-of-the-art bundling frameworks.
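The segmentation step can be pictured as nearest-neighbor search in the learned latent space: each subject streamline is embedded, matched to its closest atlas streamline, and kept only if the latent distance passes that bundle's calibrated threshold. A schematic sketch, assuming the encoder, per-bundle thresholds, and names are given:

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_streamlines(z_subject, atlas_latents, atlas_labels, thresholds):
    """Label subject streamlines by nearest atlas streamline in latent space.

    z_subject     : (M, D) latent codes of the subject's streamlines
    atlas_latents : (A, D) latent codes of atlas streamlines
    atlas_labels  : (A,)   integer bundle label of each atlas streamline
    thresholds    : (B,)   calibrated latent-distance threshold per bundle
    Returns (M,) labels, with -1 for streamlines assigned to no bundle.
    """
    tree = cKDTree(atlas_latents)
    dist, idx = tree.query(z_subject, k=1)      # closest atlas neighbor
    labels = atlas_labels[idx].copy()
    labels[dist > thresholds[labels]] = -1      # reject distant streamlines
    return labels
```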
From digital art to AR and VR experiences, image editing and compositing have become ubiquitous. To produce beautiful composites, the camera needs to be geometrically calibrated, which can be tedious and requires a physical calibration target. In place of the traditional multi-image calibration process, we propose to directly infer camera calibration parameters such as pitch, roll, field of view, and lens distortion from a single image using a deep convolutional neural network. We train this network with automatically generated samples from a large-scale panorama dataset, yielding competitive accuracy in terms of the standard L2 error. However, we argue that minimizing such a standard error metric may not be optimal for many applications. In this work, we investigate human sensitivity to inaccuracies in geometric camera calibration. To this end, we conduct a large-scale human perception study in which participants are asked to judge the realism of 3D objects composited with correct and biased camera calibration parameters. Based on this study, we develop a new perceptual measure for camera calibration, and demonstrate that our deep calibration network outperforms previous single-image-based calibration methods on both standard metrics and this novel perceptual measure. Finally, we demonstrate the use of our calibration network for several applications, including virtual object insertion, image retrieval, and compositing. A demo of our method is available at https://lvsn.github.io/deepcalib.
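Once pitch, roll, and field of view are predicted, they can be turned into a conventional camera model for compositing. A common conversion is sketched below; the network's exact parameterization may differ, and the function names are illustrative.

```python
import numpy as np

def intrinsics_from_fov(fov_v_deg, width, height):
    """Pinhole intrinsics from a predicted vertical field of view."""
    f = (height / 2.0) / np.tan(np.radians(fov_v_deg) / 2.0)
    return np.array([[f, 0.0, width / 2.0],
                     [0.0, f, height / 2.0],
                     [0.0, 0.0, 1.0]])

def rotation_from_pitch_roll(pitch, roll):
    """World-to-camera rotation from predicted pitch and roll (radians)."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about x
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about z
    return Rz @ Rx

K = intrinsics_from_fov(60.0, 1920, 1080)   # e.g. a 60-degree vertical FoV
```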
Structural topology optimization, which aims to find the optimal physical structure that maximizes mechanical performance, is vital in engineering design applications in aerospace, mechanical, and civil engineering. Generative adversarial networks (GANs) have recently emerged as a popular alternative to traditional iterative topology optimization methods. However, these models are often difficult to train, generalize poorly, and, because their objective is to mimic optimal topologies, neglect manufacturability and performance objectives such as mechanical compliance. We propose TopoDiff, a conditional diffusion-model-based architecture that performs performance-aware and manufacturability-aware topology optimization and overcomes these issues. Our model introduces a surrogate-model-based guidance strategy that actively favors structures with low compliance and good manufacturability. Our method significantly outperforms a state-of-the-art conditional GAN, reducing the average error on physical performance by a factor of eight and producing eleven times fewer infeasible samples. By introducing diffusion models to topology optimization, we show that conditional diffusion models can also outperform GANs in engineering design synthesis applications. Our work further suggests a general framework for engineering optimization problems using diffusion models with external performance- and constraint-aware guidance.
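The guidance strategy can be pictured as classifier guidance with regressors in place of a classifier: at each denoising step, gradients of surrogate predictions (compliance, plus a manufacturability proxy such as floating material) nudge the sample toward feasible, low-compliance designs. A schematic sketch with illustrative names and omitted noise-scale factors, not the paper's exact formulation:

```python
import torch

def guided_eps(x_t, t, eps_model, compliance_model, vfloat_model,
               w_c=1.0, w_v=1.0):
    """One guidance-adjusted noise prediction, in the spirit of classifier
    guidance. compliance_model and vfloat_model are assumed pretrained
    surrogate regressors scoring compliance and floating material.
    """
    x = x_t.detach().requires_grad_(True)
    # Lower predicted compliance and less floating material are better,
    # so the sampler descends the gradient of this penalty.
    penalty = w_c * compliance_model(x, t).sum() + w_v * vfloat_model(x, t).sum()
    grad = torch.autograd.grad(penalty, x)[0]
    return eps_model(x_t, t) + grad   # shifts x_{t-1} toward lower penalty
```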
Definition: the term "robotics in snow and ice" refers to robotic systems researched, developed, and used in areas where water can be found in its solid state. This specialized branch of field robotics studies the impact of the extreme conditions associated with cold environments on autonomous vehicles.
This paper reports on the state of the art in underground SLAM by discussing the different SLAM strategies and results of six teams that participated in the three-year-long SubT competition. In particular, the paper has four main goals. First, we review the algorithms, architectures, and systems adopted by the teams; particular emphasis is put on LiDAR-centric SLAM solutions (the preferred approach of almost all teams in the competition), heterogeneous multi-robot operation (including aerial and ground robots), and real-world underground operation (from the presence of obscurants to the need to handle tight computational constraints). We do not shy away from discussing the dirty details behind the different SubT SLAM systems, which are often omitted from technical papers. Second, we discuss the maturity of the field by highlighting what is possible with current SLAM systems and what we believe is within reach with some good systems engineering. Third, we outline what we believe are fundamental open problems, likely to require further research to break through. Finally, we provide a list of open-source SLAM implementations and datasets produced during the SubT challenge and related work, constituting a useful resource for researchers and practitioners.